Sampling Strategies for MCMC
Authors
Abstract
There are many good methods for sampling Markov chains via streams of independent U[0, 1] random variables. Recently, some non-random and some random but dependent driving sequences have been shown to yield consistent Markov chain sampling, sometimes with considerably improved accuracy. The key to consistent sampling is for the driving sequence to be completely uniformly distributed (CUD) or weakly CUD (WCUD). This paper gives some sufficient conditions for an infinite sequence to be (W)CUD. The earlier theory did not incorporate acceptance-rejection sampling. We show by a coupling argument that a strategy due to Liao (1998) for inserting IID points into a WCUD sequence leads to consistent sampling. We also introduce a notion of (W)CUD triangular arrays for finite samples, and show that a lattice sampling construction of Niederreiter (1977) produces CUD triangular arrays.
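The abstract's setting can be made concrete with a small sketch. The idea is that a Markov chain sampler consumes a stream of U[0, 1] numbers, and consistency depends on properties of that stream; if the driver is passed in explicitly, an IID stream can be swapped for a (W)CUD sequence without changing the sampler. The sketch below is illustrative only: the names `mh_chain` and `log_pi`, the random-walk proposal, and the step size are assumptions for the example, not constructions from the paper.

```python
import math
import random

def mh_chain(log_pi, x0, uniforms, step=2.0):
    """Random-walk Metropolis driven by an explicit stream of U[0,1] numbers.

    Each transition consumes exactly two uniforms: one for the proposal
    and one for the accept/reject test.  Because the driving sequence is
    an argument, a deterministic (e.g. CUD) sequence can replace the IID
    stream without touching the sampler logic.
    """
    xs, x = [], x0
    it = iter(uniforms)
    for u1 in it:
        u2 = next(it)
        y = x + step * (2.0 * u1 - 1.0)   # proposal drawn from U(x-step, x+step)
        # Metropolis acceptance test on the log scale
        if math.log(max(u2, 1e-300)) < log_pi(y) - log_pi(x):
            x = y
        xs.append(x)
    return xs

# Example: standard normal target, driven here by IID uniforms.
random.seed(0)
n = 50_000
driver = (random.random() for _ in range(2 * n))
samples = mh_chain(lambda x: -0.5 * x * x, 0.0, driver)
```

With an IID driver this is ordinary Metropolis sampling; the paper's question is when replacing `driver` by a non-random or dependent sequence still gives consistent estimates.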
Similar articles
Interweaving Markov Chain Monte Carlo Strategies for Efficient Estimation of Dynamic Linear Models
In dynamic linear models (DLMs) with unknown fixed parameters, a standard Markov chain Monte Carlo (MCMC) sampling strategy is to alternate sampling of latent states conditional on fixed parameters and sampling of fixed parameters conditional on latent states. In some regions of the parameter space, this standard data augmentation (DA) algorithm can be inefficient. To improve efficiency, we app...
Testing MCMC code
Markov Chain Monte Carlo (MCMC) algorithms are a workhorse of probabilistic modeling and inference, but are difficult to debug, and are prone to silent failure if implemented naïvely. We outline several strategies for testing the correctness of MCMC algorithms. Specifically, we advocate writing code in a modular way, where conditional probability calculations are kept separate from the logic of...
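The modular-testing idea in the snippet above can be illustrated with a minimal sketch: keep the conditional probability as a standalone function so it can be checked against the joint distribution directly, without running any chain. The joint table and the name `cond_x_given_y` are made up for this example, not taken from the cited paper.

```python
# Tiny discrete joint over (x, y) in {0,1}^2, specified in one place.
JOINT = {(0, 0): 0.28, (0, 1): 0.42, (1, 0): 0.24, (1, 1): 0.06}

def cond_x_given_y(x, y):
    """Conditional a (hypothetical) Gibbs sampler would use.

    Because it is a separate function, it can be tested directly
    against the joint, independent of any sampler logic.
    """
    z = JOINT[(0, y)] + JOINT[(1, y)]
    return JOINT[(x, y)] / z

# Direct checks against the joint: each conditional must sum to one
# and agree with the joint-over-marginal ratio for every y.
for y in (0, 1):
    assert abs(cond_x_given_y(0, y) + cond_x_given_y(1, y) - 1.0) < 1e-12
    marginal = JOINT[(0, y)] + JOINT[(1, y)]
    assert abs(cond_x_given_y(1, y) - JOINT[(1, y)] / marginal) < 1e-12
```

A bug in such a conditional would otherwise surface only as subtly wrong chain output, which is exactly the silent-failure mode the snippet warns about.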
Efficient Block Sampling Strategies for Sequential Monte Carlo Methods
Sequential Monte Carlo (SMC) methods are a powerful set of simulation-based techniques for sampling sequentially from a sequence of complex probability distributions. These methods rely on a combination of importance sampling and resampling techniques. In a Markov chain Monte Carlo (MCMC) framework, block sampling strategies often perform much better than algorithms based on one-at-a-time sampl...
Scaling-up Split-Merge MCMC with Locality Sensitive Sampling (LSS)
Split-Merge MCMC (Markov chain Monte Carlo) is one of the essential and popular variants of MCMC for problems where an MCMC state consists of an unknown number of components. It is well known that state-of-the-art methods for split-merge MCMC do not scale well. Strategies for rapid mixing require smart and informative proposals to reduce the rejection rate. However, all known smart proposals in...
Adaptive Metropolis-Hastings samplers for the Bayesian analysis of large linear Gaussian systems
This paper considers the implementation of efficient Bayesian computation for large linear Gaussian models containing many latent variables. A common approach is to implement a simple MCMC procedure such as the Gibbs sampler or data augmentation, but these methods are often unsatisfactory when the model is large. This motivates the need to develop other strategies for improving MCMC. This paper...
Gradient-free MCMC methods for dynamic causal modelling
In this technical note we compare the performance of four gradient-free MCMC samplers (random walk Metropolis sampling, slice-sampling, adaptive MCMC sampling and population-based MCMC sampling with tempering) in terms of the number of independent samples they can produce per unit computational time. For the Bayesian inversion of a single-node neural mass model, both adaptive and population-bas...